18 research outputs found

    Using high resolution optical imagery to detect earthquake-induced liquefaction: the 2011 Christchurch earthquake

    Using automated supervised methods with satellite and aerial imagery for liquefaction mapping is a promising step toward providing detailed, region-scale maps of liquefaction extent immediately after an earthquake. The accuracy of these methods depends on the quantity and quality of training samples and on the number of available spectral bands. Digitizing a large number of high-quality training samples from an event may not be feasible within the timeframe required for rapid response, because the training pixels for each class should be typical of, and accurately represent, the spectral diversity of that class. To perform automated classification for liquefaction detection, we need to understand how to build an optimal and accurate training dataset. Using multispectral optical imagery from the 22 February 2011 Christchurch earthquake, we investigate the effects of the quantity of high-quality training pixels and the number of spectral bands on the performance of a pixel-based, parametric, supervised maximum likelihood classifier for liquefaction detection. We find that liquefaction surface effects are bimodal in terms of spectral signature, owing to the difference in water content between the two modes, and should therefore be classified as either wet liquefaction or dry liquefaction. Using 5-fold cross-validation, we evaluate the classifier's performance on training datasets of 50, 100, 500, 2000, and 4000 pixels. We also investigate the effect of adding spectral information, first by adding only the near-infrared (NIR) band to the visible red, green, and blue (RGB) bands, and then by using all eight available spectral bands of the WorldView-2 satellite imagery. We find that the classifier achieves high accuracies (75%–95%) with the 2000-pixel dataset and the RGB+NIR bands; increasing to the 4000-pixel dataset and/or eight spectral bands may therefore not be worth the required time and cost. We also evaluate the classifier on aerial imagery with the same number of training pixels and either RGB or RGB+NIR bands, and find that its accuracies are higher on satellite imagery given the same number of training pixels and the same spectral information. The classifier identifies dry liquefaction with higher user accuracy than wet liquefaction across all evaluated scenarios. To improve classification performance for wet liquefaction, we also investigate adding geospatial information in the form of building footprints. We find that using a building-footprint mask to exclude buildings from the classification process increases wet liquefaction user accuracy by roughly 10%.
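    The workflow described above can be illustrated with a short sketch: a Gaussian maximum likelihood classifier (equivalent to quadratic discriminant analysis) evaluated with 5-fold cross-validation on a table of training pixels. This is a minimal illustration under stated assumptions, not the authors' code; the arrays `pixels` and `labels` and the class names are hypothetical placeholders for digitized RGB+NIR training samples.

    ```python
    # Minimal sketch (not the authors' code): Gaussian maximum likelihood
    # classification of liquefaction classes with 5-fold cross-validation.
    # `pixels` is assumed to be an (n_samples, n_bands) array of training
    # pixels (e.g. RGB+NIR => 4 bands); `labels` holds hypothetical class
    # names such as "wet", "dry", "other".
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    # Placeholder data standing in for digitized training pixels.
    pixels = rng.random((2000, 4))                      # 2000 samples, 4 bands
    labels = rng.choice(["wet", "dry", "other"], size=2000)

    # A Gaussian maximum likelihood classifier fits one multivariate normal
    # distribution per class and assigns each pixel to the class with the
    # highest likelihood; QuadraticDiscriminantAnalysis implements this model.
    clf = QuadraticDiscriminantAnalysis(store_covariance=True)

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(clf, pixels, labels, cv=cv)
    print("5-fold accuracies:", np.round(scores, 3))
    ```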

    Model Validation of Recent Ground Motion Prediction Relations for Shallow Crustal Earthquakes in Active Tectonic Regions

    Recent earthquake ground motion prediction relations, such as those developed in the Next Generation Attenuation of Ground Motions (NGA) project in 2008, have established a new baseline for estimating ground motion parameters such as peak ground acceleration (PGA), peak ground velocity (PGV), and spectral acceleration (Sa). When these models were published, very little was written about model validation or prediction accuracy. We perform statistical goodness-of-fit analyses to quantitatively compare the predictive abilities of these recent models. The prediction accuracy of the models is compared using several testing subsets of the master database used to develop the NGA models. In addition, we perform a blind comparison of the new models against previous, simpler models using ground motion records from the two most recent earthquakes of magnitude 6.0 or greater to strike mainland California: (1) the 2004 M 6.0 Parkfield earthquake and (2) the 2003 M 6.5 San Simeon earthquake. By comparing the predictor variables and performance of the different models, we discuss the sources of uncertainty in estimates of ground motion parameters and offer recommendations for model development. This paper presents a model validation framework for assessing the prediction accuracy of ground motion prediction relations and for aiding in their future development.
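    A common ingredient of such goodness-of-fit comparisons is the normalized residual between observed and predicted log ground motion. The sketch below illustrates that idea only; the observations, median predictions, and sigmas are hypothetical placeholders, and the statistics shown are not the paper's specific metrics.

    ```python
    # Minimal sketch (hypothetical data, not the paper's exact procedure):
    # comparing ground motion prediction models via normalized residuals
    # between observed and predicted ln(PGA).
    import numpy as np

    observed_pga = np.array([0.12, 0.34, 0.08, 0.21])   # g, recorded motions

    models = {
        # each model supplies a median prediction (g) and a log-space sigma
        "model_A": {"median": np.array([0.10, 0.30, 0.09, 0.25]), "sigma": 0.55},
        "model_B": {"median": np.array([0.15, 0.28, 0.06, 0.18]), "sigma": 0.60},
    }

    for name, m in models.items():
        # normalized residual: (ln(obs) - ln(pred)) / sigma; values near zero
        # with roughly unit spread indicate an unbiased, well-calibrated model
        z = (np.log(observed_pga) - np.log(m["median"])) / m["sigma"]
        print(f"{name}: mean z = {z.mean():+.2f}, std z = {z.std(ddof=1):.2f}")
    ```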

    A Practical Approach for Implementing the Probability of Liquefaction in Performance Based Design

    Empirical Liquefaction Models (ELMs) are the usual approach for predicting the occurrence of soil liquefaction. These ELMs are typically based on in situ index tests, such as the Standard Penetration Test (SPT) and the Cone Penetration Test (CPT), and are broadly classified as deterministic or probabilistic. A deterministic model provides a "yes/no" answer to the question of whether or not a site will liquefy. Performance-Based Earthquake Engineering (PBEE), however, requires an estimate of the probability of liquefaction (PL), a quantitative and continuous measure of the severity of liquefaction. Probabilistic models are better suited to PBEE but are still not consistently used in routine engineering applications, primarily because of limited guidance on which model to use and the difficulty of interpreting the resulting probabilities. The practical implementation of a probabilistic model requires a threshold of liquefaction (THL). Researchers who have used probabilistic methods have either adopted a subjective THL or derived the THL from the established deterministic curves. In this study, we compare the predictive performance of various deterministic and probabilistic ELMs within a quantitative validation framework. We incorporate the estimated costs associated with risk as well as with risk mitigation to interpret PL using precision and recall, and to compute the optimal THL using a Precision-Recall (P-R) cost curve. We also provide P-R cost curves for the popular probabilistic models developed using Bayesian updating for SPT and CPT data by Cetin et al. (2004) and Moss et al. (2006), respectively. These curves should be immediately useful to a geotechnical engineer who needs to choose an optimal THL that incorporates the costs associated with the risk of liquefaction and the costs associated with mitigation.
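    The cost-based threshold selection can be sketched as follows: scan candidate THL values, count false negatives (missed liquefaction) and false positives (unnecessary mitigation), weight them by their relative costs, and keep the threshold with the lowest total cost. The probabilities, observations, and cost ratios below are hypothetical placeholders, not the paper's calibrated values.

    ```python
    # Minimal sketch (hypothetical PL values and costs): choosing the threshold
    # of liquefaction (THL) that minimizes total cost, trading missed
    # liquefaction (false negatives) against unnecessary mitigation (false
    # positives).
    import numpy as np

    p_liq = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.90])  # predicted PL per site
    observed = np.array([0, 0, 1, 0, 1, 1])                  # 1 = site liquefied

    COST_FN = 10.0   # relative cost of an unmitigated liquefaction failure
    COST_FP = 1.0    # relative cost of mitigating a site that would not liquefy

    best_thl, best_cost = None, np.inf
    for thl in np.linspace(0.05, 0.95, 19):
        predicted = p_liq >= thl
        fn = np.sum(~predicted & (observed == 1))   # missed liquefaction
        fp = np.sum(predicted & (observed == 0))    # unnecessary mitigation
        cost = COST_FN * fn + COST_FP * fp
        if cost < best_cost:
            best_thl, best_cost = thl, cost

    tp = np.sum((p_liq >= best_thl) & (observed == 1))
    precision = tp / max(np.sum(p_liq >= best_thl), 1)
    recall = tp / observed.sum()
    print(f"optimal THL ~ {best_thl:.2f}, precision {precision:.2f}, recall {recall:.2f}")
    ```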

    Site Response at Treasure and Yerba Buena Islands, California

    Detecting demolished buildings after a natural hazard using high resolution RGB satellite imagery and modified U-Net convolutional neural networks

    Collapsed buildings are usually linked with the highest number of human casualties reported after a natural disaster; quickly finding collapsed buildings can therefore expedite rescue operations and save human lives. Recently, many researchers and agencies have tried to integrate satellite imagery into rapid response. The U.S. Defense Innovation Unit Experimental (DIUx) and the National Geospatial-Intelligence Agency (NGA) recently released a ready-to-use dataset known as xView, which contains thousands of labeled very-high-resolution (VHR) RGB satellite imagery scenes with 30-cm spatial and 8-bit radiometric resolution. Two of the labeled classes represent demolished buildings, with 1067 instances, and intact buildings, with more than 300,000 instances; both classes are associated with building footprints. In this study, we use the xView imagery with its building labels (demolished and intact) to create a deep learning framework for classifying buildings as demolished or intact after a natural hazard event. We use a modified U-Net style fully convolutional neural network (CNN). The results show that the proposed framework has 78% and 95% sensitivity in detecting demolished and intact buildings, respectively, within the xView dataset. We also test the transferability and performance of the trained network on an independent dataset from the 19 September 2017 M 7.1 Puebla earthquake in central Mexico using Google Earth imagery. To this end, we test the network on 97 buildings, including 10 demolished ones, by feeding the imagery and building footprints into the trained algorithm. The sensitivity for intact and demolished buildings is 89% and 60%, respectively.
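    A compact U-Net-style encoder-decoder, such as the sketch below, conveys the core idea: convolutional downsampling, a bottleneck, and upsampling with skip connections that produce a per-pixel probability map from RGB patches. This is an illustrative sketch, not the authors' modified architecture; the patch size, filter counts, and training setup are assumptions.

    ```python
    # Minimal sketch (not the authors' exact architecture): a compact U-Net
    # style encoder-decoder that takes 3-band (RGB) image patches and predicts
    # a per-pixel probability (e.g. demolished vs. intact building pixels).
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return x

    def build_unet(patch_size=128, bands=3):
        inputs = layers.Input((patch_size, patch_size, bands))
        # encoder: two downsampling stages
        c1 = conv_block(inputs, 32)
        p1 = layers.MaxPooling2D()(c1)
        c2 = conv_block(p1, 64)
        p2 = layers.MaxPooling2D()(c2)
        # bottleneck
        b = conv_block(p2, 128)
        # decoder with skip connections (the defining U-Net feature)
        u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
        c3 = conv_block(u2, 64)
        u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
        c4 = conv_block(u1, 32)
        # per-pixel probability map
        outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
        return Model(inputs, outputs)

    model = build_unet()
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Recall(name="sensitivity")])
    model.summary()
    ```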